INTRODUCTION

PISA (the Programme for International Student Assessment) is an international programme that evaluates students. Every three years, PISA surveys the knowledge and skills of 15-year-old students. Results are compared across the participating countries and economies through the OECD (Organisation for Economic Co-operation and Development), which specialises in building comparisons across countries and cultures. More than 400,000 students from 57 countries, together representing close to 90% of the world economy, took part in PISA 2006. That cycle focused on science; the data collected on students, families and institutions were later used in statistical analyses to explain differences in student performance.

OBJECTIVE

The objective is to model the relationship between the mean science score (Overall Science Score, OSS) and the remaining variables, using spline and GAM models. The data are in the file pisasci2006.csv. The different GAM and spline models are evaluated together with their corresponding plots.

LIBRARIES

Load the libraries needed for this case study.

library(readr)
library(skimr)
library(ggplot2)
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ tibble  3.0.3     ✓ dplyr   1.0.2
## ✓ tidyr   1.1.2     ✓ stringr 1.4.0
## ✓ purrr   0.3.4     ✓ forcats 0.5.0
## ── Conflicts ────────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(broom) # tidy model output as data frames
library(flextable) # formatted tables
## 
## Attaching package: 'flextable'
## The following object is masked from 'package:purrr':
## 
##     compose
library(mgcv) # fit GAMs
## Loading required package: nlme
## 
## Attaching package: 'nlme'
## The following object is masked from 'package:dplyr':
## 
##     collapse
## This is mgcv 1.8-33. For overview type 'help("mgcv-package")'.
library(reshape2) # melt
## 
## Attaching package: 'reshape2'
## The following object is masked from 'package:tidyr':
## 
##     smiths
library(splines)

KEY VARIABLES:

  • Overall Science Score (average score for 15 year olds)

  • Interest in science

  • Support for scientific inquiry

  • Income Index

  • Health Index

  • Education Index

  • Human Development Index (composed of the Income index, Health Index, and Education Index)

DATASET

pisa <- read_csv("pisasci2006.csv") 
## Parsed with column specification:
## cols(
##   Country = col_character(),
##   Overall = col_double(),
##   Issues = col_double(),
##   Explain = col_double(),
##   Evidence = col_double(),
##   Interest = col_double(),
##   Support = col_double(),
##   Income = col_double(),
##   Health = col_double(),
##   Edu = col_double(),
##   HDI = col_double()
## )
View(pisa)

DATA SUMMARY

skim(pisa)
Data summary
Name pisa
Number of rows 65
Number of columns 11
_______________________
Column type frequency:
character 1
numeric 10
________________________
Group variables None

Variable type: character

skim_variable n_missing complete_rate min max empty n_unique whitespace
Country 0 1 4 24 0 65 0

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
Overall 8 0.88 473.14 54.58 322.00 428.00 489.00 513.00 563.00 ▁▃▂▇▅
Issues 8 0.88 469.91 53.93 321.00 427.00 489.00 514.00 555.00 ▁▃▂▇▆
Explain 8 0.88 475.02 54.02 334.00 432.00 490.00 517.00 566.00 ▁▃▂▇▃
Evidence 8 0.88 469.81 61.74 288.00 423.00 489.00 515.00 567.00 ▁▂▃▇▆
Interest 8 0.88 528.16 49.84 448.00 501.00 522.00 565.00 644.00 ▆▇▅▃▂
Support 8 0.88 512.18 26.08 447.00 494.00 512.00 529.00 569.00 ▂▅▇▆▂
Income 4 0.94 0.74 0.11 0.41 0.66 0.76 0.83 0.94 ▁▃▇▇▆
Health 4 0.94 0.89 0.07 0.72 0.84 0.89 0.94 0.99 ▂▂▇▆▇
Edu 6 0.91 0.80 0.11 0.54 0.72 0.81 0.88 0.99 ▂▅▆▇▅
HDI 6 0.91 0.81 0.09 0.58 0.75 0.82 0.88 0.94 ▁▃▇▇▇
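The summary shows missing values in several columns (8 missing Overall scores, for example), which is why the GAM fits below use na.action = na.exclude and end up with n = 52. A minimal sketch of how to inspect the rows lost to missingness (assuming pisa has been loaded as above):

```r
# Count complete cases: the rows actually available to the GAM fits
sum(complete.cases(pisa))

# List the countries dropped because at least one variable is missing
pisa %>%
  filter(!complete.cases(.)) %>%
  pull(Country)
```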

GAM MODEL 0

gam_model0 <- gam(Overall ~ Interest + s(Support) + s(Income) + s(Health) + 
                     s(Edu) + s(HDI), data = pisa, na.action = na.exclude)

summary(gam_model0)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ Interest + s(Support) + s(Income) + s(Health) + s(Edu) + 
##     s(HDI)
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 653.0796    80.6353   8.099 4.96e-09 ***
## Interest     -0.3447     0.1527  -2.257   0.0315 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##              edf Ref.df     F p-value    
## s(Support) 1.620  2.006 0.976  0.3870    
## s(Income)  7.878  8.447 6.955 3.1e-05 ***
## s(Health)  1.000  1.000 0.115  0.7368    
## s(Edu)     6.365  7.275 2.511  0.0438 *  
## s(HDI)     3.229  4.180 0.888  0.4562    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =   0.89   Deviance explained = 93.6%
## GCV = 555.12  Scale est. = 319.28    n = 52
par(mfrow = c(2, 3))
plot(gam_model0, se = TRUE, col = 'red')

gam.check(gam_model0)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 22 iterations.
## The RMS GCV score gradient at convergence was 2.202292e-05 .
## The Hessian was positive definite.
## Model rank =  47 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##              k'  edf k-index p-value
## s(Support) 9.00 1.62      NA      NA
## s(Income)  9.00 7.88      NA      NA
## s(Health)  9.00 1.00      NA      NA
## s(Edu)     9.00 6.36      NA      NA
## s(HDI)     9.00 3.23      NA      NA

GAM MODEL 1

gam_model1 <- gam(Overall ~ s(Interest) + Support + s(Income) + s(Health) + 
                   s(Edu) + s(HDI), data = pisa, na.action = na.exclude)

summary(gam_model1)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ s(Interest) + Support + s(Income) + s(Health) + s(Edu) + 
##     s(HDI)
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept) 379.0169   105.2806   3.600  0.00113 **
## Support       0.1803     0.2060   0.875  0.38832   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##               edf Ref.df     F  p-value    
## s(Interest) 1.816  2.257 2.452   0.0878 .  
## s(Income)   7.579  8.234 6.297 7.95e-05 ***
## s(Health)   1.000  1.000 0.267   0.6090    
## s(Edu)      6.653  7.503 2.613   0.0422 *  
## s(HDI)      2.987  3.881 0.884   0.5128    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =   0.89   Deviance explained = 93.5%
## GCV = 558.27  Scale est. = 321.71    n = 52
par(mfrow = c(2, 3))
plot(gam_model1, se = TRUE, col = 'blue')

gam.check(gam_model1)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 24 iterations.
## The RMS GCV score gradient at convergence was 3.238245e-05 .
## The Hessian was positive definite.
## Model rank =  47 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##               k'  edf k-index p-value
## s(Interest) 9.00 1.82      NA      NA
## s(Income)   9.00 7.58      NA      NA
## s(Health)   9.00 1.00      NA      NA
## s(Edu)      9.00 6.65      NA      NA
## s(HDI)      9.00 2.99      NA      NA

GAM MODEL 2

gam_model2 <- gam(Overall ~ s(Interest) + s(Support) + Income + s(Health) + 
                   s(Edu) + s(HDI), data = pisa, na.action = na.exclude)

summary(gam_model2)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ s(Interest) + s(Support) + Income + s(Health) + s(Edu) + 
##     s(HDI)
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   3404.7      656.2   5.188 1.20e-05 ***
## Income       -3923.9      877.8  -4.470 9.49e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##               edf Ref.df     F  p-value    
## s(Interest) 2.116  2.638 3.376 0.031169 *  
## s(Support)  1.322  1.535 0.516 0.437583    
## s(Health)   2.640  3.262 7.243 0.000684 ***
## s(Edu)      7.095  8.006 4.919 0.000563 ***
## s(HDI)      5.384  6.267 6.404 0.000152 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =  0.861   Deviance explained = 91.4%
## GCV = 669.26  Scale est. = 404.69    n = 52
par(mfrow = c(2, 3))
plot(gam_model2, se = TRUE, col = 'purple')

gam.check(gam_model2)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 11 iterations.
## The RMS GCV score gradient at convergence was 0.002469787 .
## The Hessian was positive definite.
## Model rank =  47 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##               k'  edf k-index p-value
## s(Interest) 9.00 2.12      NA      NA
## s(Support)  9.00 1.32      NA      NA
## s(Health)   9.00 2.64      NA      NA
## s(Edu)      9.00 7.09      NA      NA
## s(HDI)      9.00 5.38      NA      NA

GAM MODEL 3

gam_model3 <- gam(Overall ~ s(Interest) + s(Support) + s(Income) + Health + 
                   s(Edu) + s(HDI), data = pisa, na.action = na.exclude)

summary(gam_model3)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ s(Interest) + s(Support) + s(Income) + Health + s(Edu) + 
##     s(HDI)
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   339.23      26.55  12.778 8.59e-14 ***
## Health        147.83      29.72   4.973 2.40e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##               edf Ref.df     F  p-value    
## s(Interest) 1.000  1.000 5.207   0.0295 *  
## s(Support)  1.637  2.023 1.059   0.3569    
## s(Income)   7.915  8.462 7.731 1.11e-05 ***
## s(Edu)      6.382  7.288 3.272   0.0113 *  
## s(HDI)      3.441  4.375 2.814   0.0305 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Rank: 46/47
## R-sq.(adj) =  0.892   Deviance explained = 93.5%
## GCV = 536.15  Scale est. = 314.98    n = 52
par(mfrow = c(2, 3))
plot(gam_model3, se = TRUE, col = 'orange')

gam.check(gam_model3)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 25 iterations.
## The RMS GCV score gradient at convergence was 0.0005505098 .
## The Hessian was positive definite.
## Model rank =  46 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##               k'  edf k-index p-value
## s(Interest) 9.00 1.00      NA      NA
## s(Support)  9.00 1.64      NA      NA
## s(Income)   9.00 7.92      NA      NA
## s(Edu)      9.00 6.38      NA      NA
## s(HDI)      9.00 3.44      NA      NA

GAM MODEL 4

gam_model4 <- gam(Overall ~ s(Interest) + s(Support) + s(Income) + s(Health) + 
                   Edu + s(HDI), data = pisa, na.action = na.exclude)

summary(gam_model4)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ s(Interest) + s(Support) + s(Income) + s(Health) + 
##     Edu + s(HDI)
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   331.46      11.93   27.79  < 2e-16 ***
## Edu           172.52      14.61   11.81 2.01e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##                edf Ref.df     F  p-value    
## s(Interest) 1.0000 1.0000 6.774    0.013 *  
## s(Support)  1.6252 2.0325 0.977    0.388    
## s(Income)   7.1186 8.0928 6.456 2.19e-05 ***
## s(Health)   1.4137 1.7150 0.193    0.754    
## s(HDI)      0.9646 0.9646 4.130    0.053 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Rank: 46/47
## R-sq.(adj) =  0.833   Deviance explained = 87.3%
## GCV = 653.86  Scale est. = 488.14    n = 52
par(mfrow = c(2, 3))
plot(gam_model4, se = TRUE, col = 'yellow')

gam.check(gam_model4)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 19 iterations.
## The RMS GCV score gradient at convergence was 1.791789e-05 .
## The Hessian was positive definite.
## Model rank =  46 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##                k'   edf k-index p-value
## s(Interest) 9.000 1.000      NA      NA
## s(Support)  9.000 1.625      NA      NA
## s(Income)   9.000 7.119      NA      NA
## s(Health)   9.000 1.414      NA      NA
## s(HDI)      9.000 0.965      NA      NA

GAM MODEL 5

gam_model5 <- gam(Overall ~ s(Interest) + s(Support) + s(Income) + s(Health) + 
                   s(Edu) + HDI, data = pisa, na.action = na.exclude)

summary(gam_model5)
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Overall ~ s(Interest) + s(Support) + s(Income) + s(Health) + 
##     s(Edu) + HDI
## 
## Parametric coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  271.789      7.374   36.86   <2e-16 ***
## HDI          245.278      8.971   27.34   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##                edf Ref.df     F p-value    
## s(Interest) 6.1121 7.0972 2.053  0.0825 .  
## s(Support)  1.0000 1.0000 1.111  0.3005    
## s(Income)   7.4788 8.2642 6.487 6.7e-05 ***
## s(Health)   0.9998 0.9998 5.214  0.0299 *  
## s(Edu)      6.7973 7.6128 2.523  0.0364 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Rank: 46/47
## R-sq.(adj) =  0.896   Deviance explained = 94.2%
## GCV = 551.94  Scale est. = 303.65    n = 52
par(mfrow = c(2, 3))
plot(gam_model5, se = TRUE, col = 'green')

gam.check(gam_model5)

## 
## Method: GCV   Optimizer: magic
## Smoothing parameter selection converged after 23 iterations.
## The RMS GCV score gradient at convergence was 0.0003443772 .
## The Hessian was positive definite.
## Model rank =  46 / 47 
## 
## Basis dimension (k) checking results. Low p-value (k-index<1) may
## indicate that k is too low, especially if edf is close to k'.
## 
##               k'  edf k-index p-value
## s(Interest) 9.00 6.11      NA      NA
## s(Support)  9.00 1.00      NA      NA
## s(Income)   9.00 7.48      NA      NA
## s(Health)   9.00 1.00      NA      NA
## s(Edu)      9.00 6.80      NA      NA
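With six candidate models fitted, they can be ranked on a common criterion rather than compared summary by summary. A quick sketch, assuming the six model objects above are still in the workspace (AIC is only indicative here, since the smooths themselves are estimated):

```r
# Collect the six fits and compare them by AIC and adjusted R-squared
models <- list(gam_model0, gam_model1, gam_model2,
               gam_model3, gam_model4, gam_model5)

comparison <- data.frame(
  model  = paste0("gam_model", 0:5),
  AIC    = sapply(models, AIC),
  adj_r2 = sapply(models, function(m) summary(m)$r.sq)
)

# Lower AIC is better; print the table sorted accordingly
comparison[order(comparison$AIC), ]
```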

SPLINES

We can examine the spline fitted to each predictor individually.

Interest

Spline for Interest:

gam_int_k50 <- gam(Overall ~ s(Interest, k = 50), data = pisa)
plot(gam_int_k50, residuals = TRUE, pch = 1)

Support

Spline for Support:

gam_sup_k30 <- gam(Overall ~ s(Support, k = 30), data = pisa)
plot(gam_sup_k30, residuals = TRUE, pch = 1)

Income

Spline for Income:

gam_inc_k50 <- gam(Overall ~ s(Income, k = 50), data = pisa)
plot(gam_inc_k50, residuals = TRUE, pch = 1)

Health

Spline for Health:

gam_hea_k50 <- gam(Overall ~ s(Health, k = 50), data = pisa)
plot(gam_hea_k50, residuals = TRUE, pch = 1)

Edu

Spline for Edu:

gam_edu_k50 <- gam(Overall ~ s(Edu, k = 50), data = pisa)
plot(gam_edu_k50, residuals = TRUE, pch = 1)

HDI

Spline for HDI:

gam_hdi_k20 <- gam(Overall ~ s(HDI, k = 20), data = pisa)
plot(gam_hdi_k20, residuals = TRUE, pch = 1)
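The splines package loaded at the start offers an alternative to mgcv's penalized smooths: fixed-knot regression splines inside an ordinary lm(). A minimal sketch for Income (df = 4 is an illustrative choice, not a value taken from the analysis above):

```r
# Natural cubic spline for Income with 4 degrees of freedom, fitted by OLS
ns_income <- lm(Overall ~ ns(Income, df = 4), data = pisa)
summary(ns_income)

# Plot the fitted curve over the observed points
income_grid <- data.frame(
  Income = seq(min(pisa$Income, na.rm = TRUE),
               max(pisa$Income, na.rm = TRUE),
               length.out = 100)
)
plot(pisa$Income, pisa$Overall, pch = 1,
     xlab = "Income", ylab = "Overall")
lines(income_grid$Income, predict(ns_income, newdata = income_grid),
      col = "red")
```

Unlike s(Income), where mgcv chooses the effective degrees of freedom by GCV, here the flexibility is fixed in advance by df.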

GAM MODEL RESIDUALS

Plots

Model 0

plot(gam_model0, residuals = TRUE, pch = 1)
## Warning in fv.terms[, length(order) + i] + w.resid: longer object length is
## not a multiple of shorter object length
## (warning repeated once per smooth term)
## Warning in plot.mgcv.smooth(x$smooth[[i]], P = pd[[i]], partial.resids =
## partial.resids, : Partial residuals do not have a natural x-axis location for
## linear functional terms
## (warning repeated once per smooth term)

Model 1

plot(gam_model1, residuals = TRUE, pch = 1)
## (same warnings as for Model 0: object-length and partial-residual x-axis
## warnings, repeated once per smooth term)

Model 2

plot(gam_model2, residuals = TRUE, pch = 1)
## (same warnings as for Model 0: object-length and partial-residual x-axis
## warnings, repeated once per smooth term)

Model 3

plot(gam_model3, residuals = TRUE, pch = 1)
## (same warnings as for Model 0: object-length and partial-residual x-axis
## warnings, repeated once per smooth term)

Model 4

plot(gam_model4, residuals = TRUE, pch = 1)
## (same warnings as for Model 0: object-length and partial-residual x-axis
## warnings, repeated once per smooth term)

Model 5

plot(gam_model5, residuals = TRUE, pch = 1)
## (same warnings as for Model 0: object-length and partial-residual x-axis
## warnings, repeated once per smooth term)

GITHUB LINK

Link to the GitHub repository containing the code: https://github.com/martaruedas/CP03_PISA.git